Heterogeneous graph neural networks (HGNNs) offer powerful capabilities for embedding the rich structural and semantic information of a heterogeneous graph into low-dimensional node representations. Existing HGNNs usually learn such embeddings through hierarchical attention mechanisms and repeated neighbor aggregation, suffering from unnecessary complexity and redundant computation. This paper proposes the Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN), which reduces this excess complexity by avoiding the overused node-level attention within the same relation and by pre-computing neighbor aggregation in a preprocessing stage. Unlike previous work, SeHGNN uses a lightweight neighbor aggregator to learn the structural information of each metapath, and a transformer-based semantic aggregator to combine semantic information across metapaths into the final embedding of each node. As a result, SeHGNN offers a simple network structure, high prediction accuracy, and fast training speed. Extensive experiments on five real-world heterogeneous graphs demonstrate the superiority of SeHGNN in both accuracy and training speed. Code is available at https://github.com/ict-gimlab/sehgnn.
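As a rough illustration of this two-stage design, the sketch below precomputes a parameter-free per-metapath aggregation once and then fuses the per-metapath embeddings with attention across metapaths. All module names, dimensions, and the toy data are hypothetical; this is a minimal sketch of the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Preprocessing stage (done once, before training): for each metapath,
# aggregate the features of neighbors reachable along that metapath.
def precompute_metapath_features(x, metapath_adjs):
    # x: [num_nodes, in_dim]; metapath_adjs: list of row-normalized
    # adjacency matrices, one per metapath.
    return [adj @ x for adj in metapath_adjs]

class SemanticAggregator(nn.Module):
    """Fuse per-metapath embeddings with self-attention across metapaths."""
    def __init__(self, in_dim, hidden_dim, num_metapaths, num_heads=4):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.out = nn.Linear(num_metapaths * hidden_dim, hidden_dim)

    def forward(self, metapath_feats):
        # metapath_feats: list of [num_nodes, in_dim], one per metapath.
        h = torch.stack([self.proj(f) for f in metapath_feats], dim=1)
        h, _ = self.attn(h, h, h)          # attention across metapaths
        return self.out(h.flatten(1))      # [num_nodes, hidden_dim]

# Toy usage: 5 nodes, 8-dim features, 3 metapaths.
x = torch.randn(5, 8)
adjs = [torch.softmax(torch.randn(5, 5), dim=1) for _ in range(3)]
feats = precompute_metapath_features(x, adjs)   # parameter-free, precomputed
model = SemanticAggregator(8, 16, num_metapaths=3)
print(model(feats).shape)  # torch.Size([5, 16])
```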
Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-related classifiers or sampling a representation from relevant discrete samples. However, these methods are not effective enough at modeling both the latent space and the control, leaving the controlled text with low quality and diversity. In this work, we propose a novel control framework using probability density estimation in the latent space. Our method utilizes an invertible transformation function, the Normalizing Flow, that maps the complex distributions in the latent space to simple Gaussian distributions in the prior space. Thus, we can perform sophisticated and flexible control in the prior space and feed the control effects back into the latent space owing to the one-to-one mapping property of invertible transformations. Experiments on single-attribute and multi-attribute control reveal that our method outperforms several strong baselines on attribute relevance and text quality, achieving state-of-the-art results. Further analysis of control strength adjustment demonstrates the flexibility of our control strategy.
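The round trip that makes this possible is sketched below, with a single RealNVP-style affine coupling block standing in for the full Normalizing Flow: map a latent code to the prior space, apply a control operation there (here, a simple shift toward a hypothetical attribute region), and map back exactly thanks to invertibility.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style invertible block: a stand-in for the full
    Normalizing Flow mapping latent space -> Gaussian prior space."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, z):                      # latent -> prior
        a, b = z[:, :self.half], z[:, self.half:]
        s, t = self.net(a).chunk(2, dim=-1)
        return torch.cat([a, b * torch.exp(s) + t], dim=-1)

    def inverse(self, u):                      # prior -> latent
        a, b = u[:, :self.half], u[:, self.half:]
        s, t = self.net(a).chunk(2, dim=-1)
        return torch.cat([a, (b - t) * torch.exp(-s)], dim=-1)

flow = AffineCoupling(dim=16)
z = torch.randn(4, 16)                 # latent codes from, e.g., a VAE encoder
u = flow(z)                            # map to the simple prior space
u_controlled = u + 0.5                 # e.g., shift toward an attribute region
z_controlled = flow.inverse(u_controlled)  # feed control back to latent space
print(torch.allclose(flow.inverse(u), z, atol=1e-5))  # invertibility check: True
```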
Choice models have been a central topic in the study of individual preference or utility across many fields such as economics, marketing, operations research, and psychology. While the vast majority of the literature on choice models is devoted to analytical properties that lead to managerial and policy insights, existing methods for learning choice models from empirical data are often computationally intractable or inefficient. In this paper, we develop learning-based choice models under two settings: (i) feature-free and (ii) feature-based. Our models capture both the intrinsic utility of each candidate choice and the effect of the assortment on the choice probability. Experiments on synthetic and real data demonstrate the performance of the proposed models in terms of recovery of existing choice models, sample complexity, assortment effects, architecture design, and model interpretation.
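The sketch below illustrates how such a model might combine an intrinsic per-product utility with an assortment-dependent correction, using a masked softmax so that only offered products receive probability mass. The architecture is a hypothetical stand-in, not the paper's model.

```python
import torch
import torch.nn as nn

class NeuralChoiceModel(nn.Module):
    """Illustrative feature-based choice model: each product gets an intrinsic
    utility from its features, plus a correction driven by the offered
    assortment; choice probabilities come from a masked softmax."""
    def __init__(self, feat_dim, hidden=32):
        super().__init__()
        self.utility = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))
        self.assort_effect = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                           nn.Linear(hidden, 1))

    def forward(self, feats, offered):
        # feats: [n_products, feat_dim]; offered: [n_products] boolean mask.
        base = self.utility(feats).squeeze(-1)              # intrinsic utility
        context = feats[offered].mean(0, keepdim=True)      # assortment summary
        adjust = self.assort_effect(feats + context).squeeze(-1)
        logits = (base + adjust).masked_fill(~offered, float("-inf"))
        return torch.softmax(logits, dim=-1)                # choice probabilities

feats = torch.randn(6, 5)                      # 6 candidate products
offered = torch.tensor([1, 1, 0, 1, 0, 1], dtype=torch.bool)
model = NeuralChoiceModel(feat_dim=5)
print(model(feats, offered))                   # zero mass on unoffered items
```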
With the development of streaming media technology, communication increasingly relies on audio and visual information, which places a heavy burden on online media. Data compression has therefore become increasingly important for reducing the volume of data transmission and storage. To further improve the efficiency of image compression, researchers have leveraged various image processing methods to compensate for the limitations of conventional codecs and advanced learning-based compression methods. Instead of modifying compression-oriented methods, we propose a unified image compression preprocessing framework, called Kuchen, that aims to further improve the performance of existing codecs. The framework consists of a hybrid data labeling system and a learning-based backbone that simulates personalized preprocessing. To the best of our knowledge, this is the first exploration of a unified preprocessing benchmark for image compression tasks. Results demonstrate that modern codecs optimized by our unified preprocessing framework consistently improve the efficiency of state-of-the-art compression.
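A minimal sketch of the preprocess-then-encode idea follows, with a fixed Gaussian blur standing in for the learned, personalized preprocessing backbone and JPEG standing in for the downstream codec; the point shown is simply that removing hard-to-code detail before encoding shrinks the bitstream.

```python
import io
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def preprocess(img_arr, strength=0.5):
    """Stand-in for Kuchen's learned backbone: a mild spatial blur that removes
    hard-to-code high-frequency detail before the codec sees the image.
    The real framework predicts a personalized preprocessing per image."""
    return gaussian_filter(img_arr.astype(np.float32),
                           sigma=(strength, strength, 0))

def jpeg_size(img_arr, quality=75):
    buf = io.BytesIO()
    Image.fromarray(np.uint8(img_arr.clip(0, 255))).save(buf, "JPEG",
                                                         quality=quality)
    return buf.tell()  # size of the encoded bitstream in bytes

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128, 3)).astype(np.float32)
print("raw codec bytes:      ", jpeg_size(img))
print("preprocessed + codec: ", jpeg_size(preprocess(img)))
```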
Many products are sold in the following sequence: first a focal product is shown, and if the customer purchases it, one or more ancillary products are displayed for purchase. A prominent example is the sale of airline tickets, where a flight is shown first and, upon its selection, a number of ancillaries such as cabin or bag options, seat selection, insurance, etc., are offered. The firm has to decide on a sales format — whether to sell sequentially or as a bundle — and how to price the focal and ancillary products, either separately or as a bundle. Since the ancillary is considered only after the focal product is purchased, the sales strategy chosen by the firm creates information and learning dependencies between the products: for instance, offering only a bundle precludes learning customers' valuations for the focal and ancillary products individually. In this paper, we study learning strategies for such focal and ancillary item combinations under the following scenarios: (a) pure bundling for all customers; (b) a personalized mechanism, in which, depending on some observed features of the customer, the two products are presented and priced either as a bundle or sequentially; and (c) initially offering one format to all customers and switching permanently during the horizon if the other format is more profitable. We design pricing and decision-making algorithms for all three scenarios, with regret bounded by $O(d\sqrt{T}\log T)$, and characterize the optimal switching time for the third scenario.
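As a toy illustration of the format decision in scenario (c), the simulation below estimates revenue for a sequential format (where the ancillary is only seen after a focal purchase) and for pure bundling over a small price grid, then commits to the better format. The valuation model, price grids, and explore-then-commit rule are hypothetical stand-ins, not the paper's regret-optimal algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical customer valuations (unknown to the firm).
v_focal, v_ancillary = 100.0, 30.0
noise = lambda n: rng.normal(0, 10, n)

def revenue_sequential(p_f, p_a, n=2000):
    vf, va = v_focal + noise(n), v_ancillary + noise(n)
    buy_f = vf >= p_f                       # focal shown first
    buy_a = buy_f & (va >= p_a)             # ancillary only seen after focal
    return (buy_f * p_f + buy_a * p_a).mean()

def revenue_bundle(p_b, n=2000):
    vb = v_focal + v_ancillary + noise(n)   # joint valuation of the bundle
    return ((vb >= p_b) * p_b).mean()

# Explore a small price grid for each format, then commit to the better one
# (a crude stand-in for the paper's learn-then-switch policy).
seq = max(revenue_sequential(pf, pa) for pf in range(80, 121, 10)
                                     for pa in range(10, 51, 10))
bun = max(revenue_bundle(pb) for pb in range(100, 161, 10))
print(f"best sequential: {seq:.1f}, best bundle: {bun:.1f}")
print("commit to:", "sequential" if seq > bun else "bundle")
```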
Transformers have achieved breakthroughs in NLP and computer vision, and have recently begun to show promising performance in trajectory prediction for autonomous vehicles (AVs). How to effectively model the interactive relationships between the ego agent and other road users and dynamic objects remains challenging for the standard attention module. In this work, we propose a transformer-like architecture module, the MnM network, equipped with a novel masked goal-conditioning training procedure for AV trajectory prediction. The resulting model, named Golfer, achieves state-of-the-art performance, winning second place in the 2022 Waymo Open Dataset Motion Prediction Challenge and ranking first in terms of minADE.
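The sketch below illustrates the masked goal-conditioning idea in isolation: a decoder receives an encoded agent history plus a goal embedding that is randomly zeroed out during training, so it learns to predict trajectories both with and without a goal hint. Shapes and modules are hypothetical; this is not the MnM architecture.

```python
import torch
import torch.nn as nn

class GoalConditionedDecoder(nn.Module):
    """Illustrative masked goal conditioning: the decoder receives the agent
    history plus a goal embedding that is randomly zeroed during training."""
    def __init__(self, hist_dim=32, goal_dim=2, horizon=8, p_mask=0.5):
        super().__init__()
        self.p_mask = p_mask
        self.horizon = horizon
        self.goal_embed = nn.Linear(goal_dim, hist_dim)
        self.decoder = nn.Sequential(nn.Linear(hist_dim, 64), nn.ReLU(),
                                     nn.Linear(64, horizon * 2))

    def forward(self, hist_feat, goal):
        g = self.goal_embed(goal)
        if self.training:                      # mask the goal at random
            keep = (torch.rand(g.size(0), 1) > self.p_mask).float()
            g = g * keep
        out = self.decoder(hist_feat + g)
        return out.view(-1, self.horizon, 2)   # future (x, y) waypoints

model = GoalConditionedDecoder()
hist = torch.randn(4, 32)                      # encoded agent history
goal = torch.randn(4, 2)                       # candidate goal position
print(model(hist, goal).shape)                 # torch.Size([4, 8, 2])
```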
We consider the dynamic pricing problem with covariates under a generalized linear demand model: a seller can dynamically adjust the price of a product over a horizon of $T$ time periods, and at each time period $t$ the demand for the product is jointly determined by the price and an observable covariate vector $x_t \in \mathbb{R}^d$ through an unknown generalized linear model. Most of the existing literature assumes that the covariate vectors $x_t$ are independently and identically distributed (i.i.d.); the few papers that relax this assumption either sacrifice model generality or yield suboptimal regret bounds. In this paper, we show that a simple pricing algorithm achieves an $O(d\sqrt{T}\log T)$ regret upper bound without assuming any statistical structure on the covariates $x_t$ (which can even be adversarially chosen). The upper bound matches the lower bound (even under the i.i.d. assumption) up to logarithmic factors. Our paper thus shows that (i) the i.i.d. assumption is not necessary for obtaining low regret, and (ii) the regret bound can be independent of the (inverse) minimum eigenvalue of the covariance matrix of the $x_t$'s, a quantity present in previous bounds. Moreover, we discuss a condition under which a better regret is achievable and how a Thompson sampling algorithm can be applied to compute prices efficiently.
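A bare-bones simulation of pricing under a generalized linear (here logistic) demand model follows: after a brief forced-exploration phase, the seller refits the GLM by maximum likelihood each period and charges the price that maximizes estimated revenue given the current covariate. The demand parameters and the greedy pricer are illustrative, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, T = 3, 500
theta_x, theta_p = rng.normal(size=d), -0.8   # unknown true demand parameters

X, P, Y = [], [], []
for t in range(T):
    x = rng.normal(size=d)                    # covariate (no i.i.d. needed)
    if t < 50:
        price = rng.uniform(0.5, 3.0)         # brief forced exploration
    else:
        # Maximum-likelihood fit of the logistic demand on past data.
        model = LogisticRegression().fit(np.column_stack([X, P]), Y)
        w, b = model.coef_[0], model.intercept_[0]
        # Greedy price: maximize p * estimated purchase probability.
        rev = lambda p: -p / (1 + np.exp(-(x @ w[:d] + w[d] * p + b)))
        price = minimize_scalar(rev, bounds=(0.5, 3.0), method="bounded").x
    buy = rng.random() < 1 / (1 + np.exp(-(x @ theta_x + theta_p * price)))
    X.append(x); P.append(price); Y.append(int(buy))

print(f"estimated price coefficient: {w[d]:.2f} (true {theta_p})")
```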
We consider a general online stochastic optimization problem with multiple budget constraints over a horizon of finitely many time periods. In each period, a reward function and multiple cost functions are revealed, and the decision-maker must choose an action from a convex and compact action set to collect the reward and consume the budgets; each cost function corresponds to the consumption of one budget. In each period, the reward and cost functions are drawn from an unknown distribution, which is non-stationary across time. The decision-maker aims to maximize the cumulative reward subject to the budget constraints. This formulation captures a wide range of applications, including online linear programming and network revenue management, among others. In this paper, we consider two settings: (i) a data-driven setting in which the true distribution is unknown but a prior estimate (possibly inaccurate) is available; and (ii) an uninformative setting in which the true distribution is completely unknown. We propose a unified Wasserstein-distance-based measure to quantify the inaccuracy of the prior estimate in setting (i) and the non-stationarity of the system in setting (ii), and show that the proposed measure leads to a necessary condition for attaining sublinear regret in both settings. For setting (i), we propose a new algorithm that takes a primal-dual perspective and integrates the prior information about the underlying distributions into an online gradient descent procedure in the dual space. The algorithm naturally extends to the uninformative setting (ii). Under both settings, we show that the corresponding algorithms achieve regret of optimal order. In numerical experiments, we demonstrate how the proposed algorithms can be naturally integrated with the re-solving technique to further boost empirical performance.
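The following sketch shows the dual-space gradient step in its simplest form: a single budget, stationary uniform rewards and costs, and no prior information. The action is chosen greedily against the current dual price, and the dual variable is updated by online gradient descent toward the per-period budget rate. This is a stand-in under stated assumptions, not a reproduction of the paper's primal-dual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, budget = 1000, 250.0           # horizon and a single budget (illustrative)
rho = budget / T                  # per-period budget rate
mu, eta = 0.0, 0.05               # dual variable (budget price) and step size

total_reward, remaining = 0.0, budget
for t in range(T):
    r = rng.uniform(0, 1)         # revealed reward of taking the action
    c = rng.uniform(0, 1)         # revealed cost of taking the action
    # Primal step: act greedily against the current dual price mu.
    a = 1.0 if (r - mu * c > 0 and remaining >= c) else 0.0
    total_reward += a * r
    remaining -= a * c
    # Dual step: online gradient descent on mu toward the budget rate rho.
    mu = max(0.0, mu + eta * (a * c - rho))

print(f"reward {total_reward:.1f}, budget left {remaining:.1f} of {budget}")
```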
We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both "bimodal" data of NL-PL pairs and "unimodal" data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.
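A minimal sketch of the replaced-token-detection objective follows: a generator proposes plausible alternatives at masked positions, the sequence is corrupted with those samples, and a discriminator is trained to label every token as original or replaced. The random generator and the embedding-plus-linear discriminator are stand-ins for the trained models in the paper.

```python
import torch
import torch.nn as nn

vocab, seq_len, dim = 1000, 12, 64
tokens = torch.randint(0, vocab, (2, seq_len))            # NL-PL token ids
mask = torch.rand(2, seq_len) < 0.15                      # positions to corrupt

generator_logits = torch.randn(2, seq_len, vocab)         # stand-in generator
sampled = torch.distributions.Categorical(
    logits=generator_logits).sample()                     # plausible alternatives
corrupted = torch.where(mask, sampled, tokens)
is_replaced = (corrupted != tokens).float()               # detection labels

class Discriminator(nn.Module):
    """Toy per-token detector (the real model is a Transformer encoder)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, 1)
    def forward(self, ids):
        return self.head(self.embed(ids)).squeeze(-1)     # per-token logit

disc = Discriminator()
loss = nn.functional.binary_cross_entropy_with_logits(
    disc(corrupted), is_replaced)                         # RTD training loss
print(loss.item())
```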
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
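The toy sketch below shows where the injection happens: the trigger is stamped into the synthetic set inside the distillation loop itself, not at model-training time. The averaging "distillation" update and the fixed corner patch are crude stand-ins for real distillation methods and for DOORPING's iterative trigger optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(images, size=3):
    """Stamp a white patch in the corner (a toy backdoor trigger)."""
    out = images.copy()
    out[:, -size:, -size:] = 1.0
    return out

# Toy "distillation": pull a few synthetic images toward the real data
# distribution (a crude stand-in for actual dataset distillation methods).
real = rng.random((256, 16, 16)).astype(np.float32)
synthetic = rng.random((8, 16, 16)).astype(np.float32)
for step in range(100):
    batch = real[rng.integers(0, len(real), size=8)]
    synthetic += 0.1 * (batch - synthetic)       # distillation update
    # Inject the trigger into the synthetic set during distillation itself,
    # not at the model-training stage where prior attacks operate.
    synthetic[:2] = add_trigger(synthetic[:2])

print("trigger pixels in distilled data:", synthetic[:2, -3:, -3:].mean())
```

A model later trained on this distilled set inherits the backdoor even though the attacker never touches the training stage.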